fiducial marker
Robot Soccer Kit: Omniwheel Tracked Soccer Robots for Education
Passault, Gregoire, Gaspard, Clement, Ly, Olivier
Recent developments in low-cost, off-the-shelf programmable components, their modularity, and rapid prototyping have made educational robotics flourish, and it is accessible in most schools today. Robot kits make it possible to illustrate and embody theoretical problems in practical, tangible applications, and to gather multidisciplinary skills. They also give a rich, natural context for project-oriented pedagogy. However, most current robot kits are limited to the egocentric aspects of the robot's perception. This makes it difficult to access higher-level problems involving, e.g., a global view of the scene. In this paper we introduce an educational holonomic robot kit that comes with an external tracking system, which lightens the constraints on the embedded systems while at the same time allowing high-level aspects of robotics, otherwise unreachable, to be explored. Educational robotics is a field promoting the use of robots as tools to engage learners in practical applications, problems, and sometimes competitions. This approach can be backed up by constructionist and experiential learning theories. Many educational robotics platforms have recently emerged and are now used in classrooms.
- Europe > France > Nouvelle-Aquitaine > Gironde > Bordeaux (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Education (1.00)
- Leisure & Entertainment > Sports > Soccer (0.51)
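An external overhead tracker of the kind this kit describes typically reduces to recovering each robot's planar pose from the corners of its top-mounted marker. A minimal sketch of that step, assuming already-rectified field coordinates and a hypothetical corner ordering (top-left, top-right, bottom-right, bottom-left):

```python
import numpy as np

def pose_from_marker(corners):
    """Estimate a robot's 2D pose from the four corner points of its
    top-mounted fiducial marker, as seen by a calibrated overhead camera.

    corners: (4, 2) array of corners in field coordinates, ordered
             top-left, top-right, bottom-right, bottom-left.
    Returns (x, y, theta): marker centre and heading in radians.
    """
    corners = np.asarray(corners, dtype=float)
    centre = corners.mean(axis=0)
    # Heading: direction from the marker centre to the midpoint of the
    # "top" edge (the top-left/top-right corner pair).
    top_mid = corners[[0, 1]].mean(axis=0)
    d = top_mid - centre
    theta = np.arctan2(d[1], d[0])
    return centre[0], centre[1], theta

# A square marker centred at (1, 2) with its top edge facing +x:
sq = np.array([[1.1, 1.9], [1.1, 2.1], [0.9, 2.1], [0.9, 1.9]])
x, y, th = pose_from_marker(sq)
```

A real system would first undistort the camera image and detect the marker corners with a standard fiducial library; the geometry above is the part that turns those detections into robot poses.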
Fiducial Marker Splatting for High-Fidelity Robotics Simulations
High-fidelity 3D simulation is critical for training mobile robots, but its traditional reliance on mesh-based representations often struggles in complex environments, such as densely packed greenhouses featuring occlusions and repetitive structures. Recent neural rendering methods, like Gaussian Splatting (GS), achieve remarkable visual realism but lack the flexibility to incorporate fiducial markers, which are essential for robotic localization and control. We propose a hybrid framework that combines the photorealism of GS with structured marker representations. Our core contribution is a novel algorithm for efficiently generating GS-based fiducial markers (e.g., AprilTags) within cluttered scenes. Experiments show that our approach outperforms traditional image-fitting techniques in both efficiency and pose-estimation accuracy. We further demonstrate the framework's potential in a greenhouse simulation. This agricultural setting serves as a challenging testbed, as its combination of dense foliage, similar-looking elements, and occlusions pushes the limits of perception, thereby highlighting the framework's value for real-world applications.
- North America > United States > Oklahoma > Beaver County (0.04)
- Europe > Switzerland (0.04)
- Asia > Singapore (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots > Locomotion (0.35)
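The core idea of representing a planar tag with Gaussian primitives can be illustrated very simply: each black or white module of the tag bitmap becomes one flat, isotropic Gaussian. A toy sketch (the field names and per-cell placement are illustrative assumptions, not the paper's algorithm, which optimises the primitives within a full scene):

```python
import numpy as np

def marker_to_gaussians(bitmap, cell=0.05):
    """Convert a binary fiducial bitmap (1 = black module) into a list of
    flat Gaussian primitives: one isotropic Gaussian per cell, centred on
    the cell and coloured black or white. A toy stand-in for GS-based
    marker generation; a real system would optimise scales and opacities
    jointly with the surrounding splatted scene.
    """
    bitmap = np.asarray(bitmap)
    h, w = bitmap.shape
    prims = []
    for i in range(h):
        for j in range(w):
            prims.append({
                "mean": np.array([(j + 0.5) * cell, (i + 0.5) * cell, 0.0]),
                "scale": cell / 2.0,                     # isotropic footprint
                "color": 0.0 if bitmap[i, j] else 1.0,   # black / white
                "opacity": 1.0,
            })
    return prims

tag = np.array([[1, 0, 1],
                [0, 1, 0],
                [1, 0, 1]])
prims = marker_to_gaussians(tag)
```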
Latent Representations for Visual Proprioception in Inexpensive Robots
Sheikholeslami, Sahara, Bölöni, Ladislau
Abstract: Robotic manipulation requires explicit or implicit knowledge of the robot's joint positions. Precise proprioception is standard in high-quality industrial robots but is often unavailable in inexpensive robots operating in unstructured environments. In this paper, we ask: to what extent can a fast, single-pass regression architecture perform visual proprioception from a single external camera image, available even in the simplest manipulation settings? We explore several latent representations, including CNNs, VAEs, ViTs, and bags of uncalibrated fiducial markers, using fine-tuning techniques adapted to the limited data available. We evaluate the achievable accuracy through experiments on an inexpensive 6-DoF robot. 1 Introduction. Proprioception is the task of recovering the configuration of the robot from its own sensors, in contrast to perception, which is directed towards external reality. In some settings, proprioception is an engineering problem solved by the internal sensors of the robot. For instance, high-quality industrial robots are so precisely actuated that we can safely consider their joint configurations known. However, in certain scenarios, such as inexpensive robots operating in unstructured environments, the proprioception information coming from the robot might be noisy, uncertain, or unreliable. These robots might be controlled through policies based on end-to-end reinforcement learning or imitation learning that define actions as functions of an external observation, a = π(o), which appears to sidestep the proprioception problem. In practice, however, if some internal proprioception is available, it can be combined with the results of external perception, in the hope that explicit proprioceptive data can support task performance.
- North America > United States > Florida > Orange County > Orlando (0.04)
- Europe > Spain > Galicia > Madrid (0.04)
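Stripped of the encoder, "single-pass regression from an image to joint angles" is a supervised mapping from pixels (or features) to a 6-vector. A minimal sketch on synthetic data, using plain ridge regression in place of the CNN/ViT/VAE encoders the paper compares (all sizes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for visual proprioception: regress 6 joint angles from a
# flattened camera image via a single linear pass. Real pipelines would
# put a learned encoder (CNN, ViT, VAE, or marker bag) in front.
n_train, n_pix, n_joints = 200, 64, 6
W_true = rng.normal(size=(n_pix, n_joints))   # hidden image->joint map
X = rng.normal(size=(n_train, n_pix))         # "images"
Y = X @ W_true                                # "joint angles"

lam = 1e-3                                    # ridge regulariser
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_pix), X.T @ Y)

x_new = rng.normal(size=n_pix)
joints_pred = x_new @ W_hat                   # one forward pass
err = np.max(np.abs(joints_pred - x_new @ W_true))
```

On this noiseless linear toy the recovery is near-exact; the interesting question the paper studies is which latent representation keeps this map learnable from limited real data.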
Robust Visual Servoing under Human Supervision for Assembly Tasks
Fernandez-Ayala, Victor Nan, Silva, Jorge, Guo, Meng, Dimarogonas, Dimos V.
We propose a framework enabling mobile manipulators to reliably complete pick-and-place tasks for assembling structures from construction blocks. The picking uses an eye-in-hand visual servoing controller for object tracking with Control Barrier Functions (CBFs) to ensure fiducial markers in the blocks remain visible. An additional robot with an eye-to-hand setup ensures precise placement, critical for structural stability. We integrate human-in-the-loop capabilities for flexibility and fault correction and analyze robustness to camera pose errors, proposing adapted barrier functions to handle them. Lastly, experiments validate the framework on 6-DoF mobile arms.
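The visibility constraint in the abstract has a compact one-degree-of-freedom caricature: let e be the bearing error between the camera axis and the marker, with e_dot = u, and keep h(e) = (fov/2)^2 − e^2 nonnegative. The CBF condition h_dot + αh ≥ 0 then becomes a box constraint on u. A minimal sketch (this is an illustrative 1-DoF clamp, not the paper's full QP formulation for 6-DoF mobile arms):

```python
def cbf_filter(u_nom, e, half_fov=0.5, alpha=2.0):
    """Minimal control barrier function filter for marker visibility.

    e:     bearing error between camera axis and marker (rad), e_dot = u.
    h(e) = half_fov**2 - e**2 must stay >= 0 (marker inside the FOV).
    The CBF condition h_dot + alpha*h >= 0 reads -2*e*u + alpha*h >= 0,
    a one-sided bound on u; we clamp the nominal input into it.
    """
    h = half_fov**2 - e**2
    if e > 1e-9:
        u_max = alpha * h / (2.0 * e)   # moving further right is limited
        return min(u_nom, u_max)
    if e < -1e-9:
        u_min = alpha * h / (2.0 * e)   # moving further left is limited
        return max(u_nom, u_min)
    return u_nom                        # centred: no binding constraint

# A nominal command that would push the marker out of view gets clamped:
u_safe = cbf_filter(u_nom=1.0, e=0.49)      # near the FOV edge
u_free = cbf_filter(u_nom=0.1, e=0.0)       # centred, passes through
```

As h shrinks near the field-of-view boundary the admissible input collapses toward zero, which is exactly the braking behaviour a visibility CBF is meant to produce.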
vS-Graphs: Integrating Visual SLAM and Situational Graphs through Multi-level Scene Understanding
Tourani, Ali, Ejaz, Saad, Bavle, Hriday, Morilla-Cabello, David, Sanchez-Lopez, Jose Luis, Voos, Holger
Current Visual Simultaneous Localization and Mapping (VSLAM) systems often struggle to create maps that are both semantically rich and easily interpretable. While incorporating semantic scene knowledge aids in building richer maps with contextual associations among mapped objects, representing them in structured formats like scene graphs has not been widely addressed, leaving maps that are hard to comprehend and of limited scalability. This paper introduces visual S-Graphs (vS-Graphs), a novel real-time VSLAM framework that integrates vision-based scene understanding with map reconstruction and a comprehensible graph-based representation. The framework infers structural elements (i.e., rooms and corridors) from detected building components (i.e., walls and ground surfaces) and incorporates them into optimizable 3D scene graphs. This solution enhances the reconstructed map's semantic richness, comprehensibility, and localization accuracy. Extensive experiments on standard benchmarks and real-world datasets demonstrate that vS-Graphs outperforms state-of-the-art VSLAM methods, reducing trajectory error by an average of 3.38% and up to 9.58% on real-world data. Furthermore, the proposed framework achieves environment-driven semantic entity detection accuracy comparable to precise LiDAR-based frameworks using only visual features. A web page containing more media and evaluation outcomes is available on https://snt-arg.github.io/vsgraphs-results/.
- North America > United States > New York > Monroe County > Rochester (0.04)
- Europe > Spain > Galicia > Madrid (0.04)
- Europe > Spain > Aragón > Zaragoza Province > Zaragoza (0.04)
- Europe > Italy > Lazio > Rome (0.04)
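The hierarchy the abstract describes (structural elements inferred above building components, which sit above the raw SLAM entities) is naturally a tree- or graph-shaped container. A minimal sketch of such a scene-graph data structure, with illustrative node kinds and field names that are assumptions rather than the vS-Graphs API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node of a toy hierarchical scene graph: keyframes at the
    bottom, building components (walls, ground) above them, structural
    elements (rooms, corridors) on top."""
    nid: str
    kind: str                      # "map" | "room" | "wall" | "keyframe" ...
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

root = Node("map", "map")
room = root.add(Node("room_0", "room"))
wall = room.add(Node("wall_0", "wall"))
wall.add(Node("kf_12", "keyframe"))

def count(node, kind):
    """Count nodes of a given kind in the subtree rooted at `node`."""
    return (node.kind == kind) + sum(count(c, kind) for c in node.children)
```

In a full system each node would also carry an optimizable pose, so that the whole graph participates in the factor-graph optimisation the paper describes.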
Unveiling the Potential of iMarkers: Invisible Fiducial Markers for Advanced Robotics
Tourani, Ali, Avsar, Deniz Isinsu, Bavle, Hriday, Sanchez-Lopez, Jose Luis, Lagerwall, Jan, Voos, Holger
Fiducial markers are widely used in various robotics tasks, facilitating enhanced navigation, object recognition, and scene understanding. Despite their advantages for robots and Augmented Reality (AR) applications, they often disrupt the visual aesthetics of environments because they are visible to humans, making them unsuitable for non-intrusive use cases. To address this gap, this paper presents "iMarkers", innovative, unobtrusive fiducial markers detectable exclusively by robots equipped with specialized sensors. These markers offer high flexibility in production, allowing customization of their visibility range and encoding algorithms to suit various demands. The paper also introduces the hardware designs and software algorithms developed for detecting iMarkers, highlighting their adaptability and robustness in the detection and recognition stages. Various evaluations have demonstrated the effectiveness of iMarkers compared to conventional (printed) and blended fiducial markers and confirmed their applicability in diverse robotics scenarios.
- North America > United States > Oklahoma > Beaver County (0.04)
- Europe > Spain > Galicia > Madrid (0.04)
Marker Track: Accurate Fiducial Marker Tracking for Evaluation of Residual Motions During Breath-Hold Radiotherapy
Fiducial marker positions in projection images of cone-beam computed tomography (CBCT) scans have been studied to evaluate daily residual motion during breath-hold radiation therapy. Fiducial marker migration posed challenges in accurately locating markers, prompting the development of a novel algorithm that reconstructs volumetric probability maps of marker locations from filtered gradient maps of projections. This map guides a Python-based algorithm that detects fiducial markers in projection images using Meta AI's Segment Anything Model 2 (SAM 2). Retrospective data from a pancreatic cancer patient with two fiducial markers were analyzed. The three-dimensional (3D) marker positions from simulation computed tomography (CT) were compared to those reconstructed from CBCT images, revealing a decrease in relative distances between markers over time. Fiducial markers were successfully detected in 2777 out of 2786 projection frames. The average standard deviation of superior-inferior (SI) marker positions was 0.56 mm per breath-hold, with differences in average SI positions between two breath-holds in the same scan reaching up to 5.2 mm, and a gap of up to 7.3 mm between the end of the first and beginning of the second breath-hold. 3D marker positions were calculated using projection positions and confirmed marker migration. This method effectively calculates marker probability volume and enables accurate fiducial marker tracking during treatment without requiring any specialized equipment, additional radiation doses, or manual initialization and labeling. It has significant potential for automatically assessing daily residual motion to adjust planning margins, functioning as an adaptive radiation therapy tool.
- Europe > Sweden > Stockholm > Stockholm (0.05)
- North America > United States > Texas > Dallas County > Dallas (0.04)
- North America > United States > Ohio > Cuyahoga County > Cleveland (0.04)
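The residual-motion statistics quoted above (per-breath-hold SI mean and standard deviation, inter-hold shift) are simple aggregations once each projection frame is labelled with its breath-hold. A minimal sketch on made-up numbers (function and variable names are illustrative; real inputs would be the SI coordinates extracted from the CBCT projections):

```python
import numpy as np

def breathhold_si_stats(si_positions, hold_ids):
    """Summarise superior-inferior (SI) marker positions per breath-hold.

    si_positions: SI coordinate (mm) of one marker, per projection frame.
    hold_ids:     breath-hold label of each frame.
    Returns {hold: (mean_mm, std_mm)}.
    """
    si = np.asarray(si_positions, dtype=float)
    ids = np.asarray(hold_ids)
    return {h: (si[ids == h].mean(), si[ids == h].std())
            for h in np.unique(ids)}

si = [0.1, 0.2, 0.0, 5.1, 5.3, 5.2]           # mm, toy values
holds = [1, 1, 1, 2, 2, 2]
stats = breathhold_si_stats(si, holds)
shift = stats[2][0] - stats[1][0]             # inter-hold SI shift (mm)
```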
Optimal Fiducial Marker Placement for Satellite Proximity Operations Using Observability Gramians
Andrews, Nicholas B., Morgansen, Kristi A.
This paper investigates optimal fiducial marker placement on the surface of a satellite performing relative proximity operations with an observer satellite. The absolute and relative translation and attitude equations of motion for the satellite pair are modeled using dual quaternions. The observability of the relative dual quaternion system is analyzed using empirical observability Gramian methods. The optimal placement of a fiducial marker set, in which each marker gives simultaneous optical range and attitude measurements, is determined for the pair of satellites. A geostationary flyby between the observing (chaser) and desired (target) satellites is numerically simulated, and the optimal placements of sets of five and ten fiducial markers on the surface of the desired satellite are solved for. It is shown that the optimal solution maximizes the distance between fiducial markers and selects marker locations that are most sensitive to measuring changes in the state during the nonlinear trajectory, despite being visible for less time than other candidate marker locations. Definitions and properties of quaternions and dual quaternions, and parallels between the two, are presented alongside the relative motion model.
- North America > United States > Washington > King County > Seattle (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Virginia > Fairfax County > Reston (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
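The empirical observability Gramian the abstract relies on is built by perturbing each initial state by ±ε, simulating the output trajectories, and accumulating the products of the output differences. A generic sketch (this is the standard empirical-Gramian construction, not the paper's dual-quaternion model; `sim` is a hypothetical black-box simulator):

```python
import numpy as np

def empirical_obs_gramian(sim, x0, eps=1e-4):
    """Empirical observability Gramian about the initial state x0.

    sim(x0) -> (T, p) array: output trajectory from initial state x0.
    W[i, j] = 1/(4 eps^2) * sum_t (y+i_t - y-i_t) . (y+j_t - y-j_t),
    where y±i is the trajectory from x0 perturbed by ±eps in state i.
    Candidate marker placements can then be ranked by, e.g., the
    minimum eigenvalue of W.
    """
    n = len(x0)
    dY = []
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        dY.append(sim(x0 + dx) - sim(x0 - dx))    # (T, p) difference
    W = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            W[i, j] = np.sum(dY[i] * dY[j]) / (4.0 * eps**2)
    return W

# Sanity check on a trivially observable system: y_t = x0 for 5 steps,
# so each state direction contributes once per step and W = 5 * I.
sim = lambda x0: np.tile(x0, (5, 1))
W = empirical_obs_gramian(sim, np.zeros(2))
```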
Integrating Vision Systems and STPA for Robust Landing and Take-Off in VTOL Aircraft
Banik, Sandeep, Kim, Jinrae, Hovakimyan, Naira, Carlone, Luca, Thomas, John P., Leveson, Nancy G.
Vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) are versatile platforms widely used in applications such as surveillance, search and rescue, and urban air mobility. Despite their potential, the critical phases of take-off and landing in uncertain and dynamic environments pose significant safety challenges due to environmental uncertainties, sensor noise, and system-level interactions. This paper presents an integrated approach combining vision-based sensor fusion with System-Theoretic Process Analysis (STPA) to enhance the safety and robustness of VTOL UAV operations during take-off and landing. By incorporating fiducial markers, such as AprilTags, into the control architecture, and performing comprehensive hazard analysis, we identify unsafe control actions and propose mitigation strategies. Key contributions include the development of a control structure with a vision system capable of identifying a fiducial marker, a multirotor controller, and the corresponding unsafe control actions and mitigation strategies. The proposed solution is expected to improve the reliability and safety of VTOL UAV operations, paving the way for resilient autonomous systems.
- North America > United States > Illinois > Champaign County > Urbana (0.15)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Transportation > Air (1.00)
- Aerospace & Defense > Aircraft (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
A Collaborative Team of UAV-Hexapod for an Autonomous Retrieval System in GNSS-Denied Maritime Environments
Lee, Seungwook, Azhari, Maulana Bisyir, Kang, Gyuree, Günes, Ozan, Han, Donghun, Shim, David Hyunchul
Abstract-- We present an integrated UAV-hexapod robotic system designed for GNSS-denied maritime operations, capable of autonomous deployment and retrieval of a hexapod robot via a winch mechanism installed on a UAV. This system is intended to address the challenges of localization, control, and mobility in dynamic maritime environments. Experimental results demonstrate the effectiveness of this system in real-world scenarios, validating its performance during field tests in both controlled and operational conditions in the MBZIRC 2023 Maritime Challenge. I. INTRODUCTION Unmanned Aerial Vehicles (UAVs) have become an essential component of modern robotics, widely used in various applications, including surveillance, inspection, search and rescue, and transportation. Their ability to fly over challenging terrains and access remote areas has expanded the (Figure 1: UAV-Hexapod system executing its mission in a GNSS-denied maritime environment. Team KAIST won 2nd place in the MBZIRC 2023 Maritime Challenge.)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > Spain > Galicia > Madrid (0.04)
- Asia > South Korea > Daejeon > Daejeon (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Information Technology (0.48)
- Aerospace & Defense > Aircraft (0.34)